Scaling Laws for Code: A More Data-Hungry Regime

Luo, Xianzhen, Zheng, Wenzhen, Zhu, Qingfu, Zhang, Rongyi, Li, Houyi, Huang, Siming, Fan, YuanTao, Che, Wanxiang

arXiv.org Artificial Intelligence

Code Large Language Models (LLMs) are revolutionizing software engineering. However, the scaling laws that guide efficient training have been analyzed predominantly on Natural Language (NL). Given fundamental differences between code and NL, such as strict syntax, it is unclear whether these laws apply directly to code. To address this gap, we conduct the first large-scale empirical study of scaling laws for code, comprising 117 experimental runs with model sizes from 0.2B to 3.8B and training tokens from 2B to 128B. We fit the Chinchilla law and the Farseer law. First, the results show that the more expressive Farseer law offers greater accuracy. Second, the analysis reveals that Code LLMs scale effectively with model size. Crucially, code represents a more data-hungry regime, requiring a substantially higher data-to-parameter ratio than NL. Finally, two additional sets of experiments on code-NL mixtures show that NL benefits resource-constrained scenarios but becomes a detriment at higher compute budgets.
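The Chinchilla form the study fits can be illustrated numerically. The sketch below assumes the standard parameterization L(N, D) = E + A/N^alpha + B/D^beta; the coefficient values and synthetic "training runs" are invented for illustration and are not the paper's fits:

```python
import numpy as np
from scipy.optimize import curve_fit

# Chinchilla parametric loss: L(N, D) = E + A/N^alpha + B/D^beta
def chinchilla_loss(ND, E, A, B, alpha, beta):
    N, D = ND
    return E + A / N**alpha + B / D**beta

# Synthetic runs spanning the paper's ranges (0.2B-3.8B params,
# 2B-128B tokens); the "true" coefficients are made up.
rng = np.random.default_rng(0)
N = rng.uniform(0.2, 3.8, 50)    # model size, billions of parameters
D = rng.uniform(2.0, 128.0, 50)  # training tokens, billions
true_params = (1.7, 0.4, 0.5, 0.34, 0.28)
L = chinchilla_loss((N, D), *true_params) + rng.normal(0.0, 1e-3, 50)

# Fit the five coefficients to the observed losses.
popt, _ = curve_fit(chinchilla_loss, (N, D), L,
                    p0=(1.5, 0.5, 0.5, 0.3, 0.3), maxfev=20000)
```

Comparing the fitted `popt` against `true_params` shows how well the law is recovered from noisy observations; on real runs, goodness of fit is what distinguishes candidate forms such as Chinchilla and Farseer.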


A Appendix

Neural Information Processing Systems

Hyper-parameter Setup. The pre-training hyper-parameters of Transcormer are described in Table 8. As mentioned in Section 2.1, some works [...] the MLM model caused by N passes [...] K tokens via masked prediction as the final sentence probability. To fulfill this target, DLM only feeds word embeddings as the key/value for each Transformer layer, rather than the previous layer. As discussed in Section 3.3, this model learns forward and backward contexts.

A.3 Results

A.3.1 Comparison with other works

As aforementioned, previous works [35, 34] have tried some strategies to calculate the probabilities; MLM adopts one bidirectional context and SLM adopts forward and backward contexts.


Demonstrating Multi-Suction Item Picking at Scale via Multi-Modal Learning of Pick Success

Wang, Che, van Baar, Jeroen, Mitash, Chaitanya, Li, Shuai, Randle, Dylan, Wang, Weiyao, Sontakke, Sumedh, Bekris, Kostas E., Katyal, Kapil

arXiv.org Artificial Intelligence

This work demonstrates how autonomously learning aspects of robotic operation from sparsely-labeled, real-world data of deployed, engineered solutions at industrial scale can provide solutions that achieve improved performance. Specifically, it focuses on multi-suction robot picking and performs a comprehensive study on the application of multi-modal visual encoders for predicting the success of candidate robotic picks. Picking diverse items from unstructured piles is an important and challenging task for robot manipulation in real-world settings, such as warehouses. Methods for picking from clutter must work for an open set of items while simultaneously meeting latency constraints to achieve high throughput. The demonstrated approach utilizes multiple input modalities, such as RGB, depth and semantic segmentation, to estimate the quality of candidate multi-suction picks. The strategy is trained on real-world item-picking data through a combination of multimodal pretraining and finetuning. The manuscript provides a comprehensive experimental evaluation performed over a large item-picking dataset, an item-picking dataset targeted to include partial occlusions, and a package-picking dataset, which focuses on containers, such as boxes and envelopes, instead of unpackaged items. The evaluation measures performance for different item configurations, pick scenes, and object types. Ablations help to understand the effects of in-domain pretraining, the impact of different modalities, and the importance of finetuning. These ablations reveal not only the importance of training over multiple modalities but also the ability of models to learn the relationships between modalities during pretraining, so that only a subset of them is needed as input during finetuning and inference.
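The subset-of-modalities property can be sketched with a toy fused scorer that accepts any subset of modality features and zero-fills the rest. The feature dimensions, weights, and linear scorer below are all invented for illustration; the paper uses learned multi-modal visual encoders, not this toy model:

```python
import random

random.seed(0)
MODALITIES = ["rgb", "depth", "segmentation"]
DIM = 4  # toy feature size per modality
weights = [random.uniform(-1.0, 1.0) for _ in range(len(MODALITIES) * DIM)]

def pick_score(features):
    """features maps modality name -> DIM-length feature list;
    missing modalities are zero-filled, so any subset is accepted."""
    fused = []
    for m in MODALITIES:
        fused += features.get(m, [0.0] * DIM)  # zero-fill absent modality
    return sum(w * x for w, x in zip(weights, fused))

# Train/eval-time: all modalities; inference-time: RGB only.
full = pick_score({"rgb": [1, 0, 0, 1], "depth": [0.5] * 4,
                   "segmentation": [0, 1, 1, 0]})
rgb_only = pick_score({"rgb": [1, 0, 0, 1]})
```

In the real system the cross-modal relationships learned during pretraining make the reduced-input scores remain informative; here the zero-filling merely shows the interface.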


A Taxonomy for Evaluating Generalist Robot Policies

Gao, Jensen, Belkhale, Suneel, Dasari, Sudeep, Balakrishna, Ashwin, Shah, Dhruv, Sadigh, Dorsa

arXiv.org Artificial Intelligence

Machine learning for robotics promises to unlock generalization to novel tasks and environments. Guided by this promise, many recent works have focused on scaling up robot data collection and developing larger, more expressive policies to achieve this. But how do we measure progress towards this goal of policy generalization in practice? Evaluating and quantifying generalization is the Wild West of modern robotics, with each work proposing and measuring different types of generalization in their own, often difficult to reproduce, settings. In this work, our goal is (1) to outline the forms of generalization we believe are important in robot manipulation in a comprehensive and fine-grained manner, and (2) to provide reproducible guidelines for measuring these notions of generalization. We first propose ★-Gen, a taxonomy of generalization for robot manipulation structured around visual, semantic, and behavioral generalization. We discuss how our taxonomy encompasses most prior notions of generalization in robotics. Next, we instantiate ★-Gen with a concrete real-world benchmark based on the widely-used Bridge V2 dataset. We evaluate a variety of state-of-the-art models on this benchmark to demonstrate the utility of our taxonomy in practice. Our taxonomy of generalization can yield many interesting insights into existing models: for example, we observe that current vision-language-action models struggle with various types of semantic generalization, despite the promise of pre-training on internet-scale language datasets. We believe ★-Gen and our guidelines can improve the dissemination and evaluation of progress towards generalization in robotics, which we hope will guide model design and future data collection efforts. We provide videos and demos at our website stargen-taxonomy.github.io. Learning-based robotics often comes with the promise of generalization.
As an example, an ambitious goal is to train a policy on diverse household data so it can enter a new home and fold laundry. This vision has led to many recent works that train robot policies on diverse datasets via imitation learning [1-13] with the hope of broad generalization. For example, if a robot encounters an unseen item of clothing in a new home, it should infer how to fold it using its extensive prior experience. However, in contrast to other domains like language and vision, we have yet to reach a point in robotics where policies can reliably generalize in this manner. In pursuit of reliable and broad generalization, recent work has focused on scaling up data collection [2-4, 14-20] and developing more expressive models [3, 7-13], following the successes of other machine learning fields. Although these advances have led to more capable policies that certainly generalize to some novel scenarios, it is often unclear from existing evaluations how generalist these policies truly are.


Vulnerability of Text-to-Image Models to Prompt Template Stealing: A Differential Evolution Approach

Wu, Yurong, Mu, Fangwen, Zhang, Qiuhong, Zhao, Jinjing, Xu, Xinrun, Mei, Lingrui, Wu, Yang, Shi, Lin, Wang, Junjie, Ding, Zhiming, Wang, Yiwei

arXiv.org Artificial Intelligence

Prompt trading has emerged as a significant intellectual property concern in recent years, where vendors entice users by showcasing sample images before selling prompt templates that can generate similar images. This work investigates a critical security vulnerability: attackers can steal prompt templates using only a limited number of sample images. To investigate this threat, we introduce Prism, a prompt-stealing benchmark consisting of 50 templates and 450 images, organized into Easy and Hard difficulty levels. To demonstrate the vulnerability of VLMs to prompt stealing, we propose EvoStealer, a novel template-stealing method that operates without model fine-tuning by leveraging differential evolution algorithms. The system first initializes population sets using multimodal large language models (MLLMs) based on predefined patterns, then iteratively generates enhanced offspring through MLLMs. During evolution, EvoStealer identifies common features across offspring to derive generalized templates. Our comprehensive evaluation conducted across open-source (INTERNVL2-26B) and closed-source models (GPT-4o and GPT-4o-mini) demonstrates that EvoStealer's stolen templates can reproduce images highly similar to the originals and effectively generalize to other subjects, significantly outperforming baseline methods with an average improvement of over 10%. Moreover, our cost analysis reveals that EvoStealer achieves template stealing with negligible computational expenses. Our code and dataset are available at https://github.com/whitepagewu/evostealer.
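The evolutionary loop can be sketched generically. The toy below replaces the MLLM-based offspring generation and image-similarity scoring with stub functions (the vocabulary, hidden template, fitness, and mutation operator are all invented), but it shows the select-mutate-evaluate cycle that EvoStealer iterates:

```python
import random

random.seed(0)
VOCAB = ["watercolor", "portrait", "neon", "8k", "cinematic",
         "sketch", "pastel", "bokeh", "dramatic", "minimalist"]
hidden_template = {"watercolor", "portrait", "pastel"}  # vendor's secret

def fitness(candidate):
    # Stub for image-similarity scoring against the sample images;
    # a small length penalty discourages bloated templates.
    return len(set(candidate) & hidden_template) - 0.1 * len(candidate)

def mutate(parent):
    # Stub for MLLM-generated offspring: perturb one descriptor.
    child = list(parent)
    child[random.randrange(len(child))] = random.choice(VOCAB)
    return child

population = [random.sample(VOCAB, 3) for _ in range(8)]
for _ in range(200):
    survivors = sorted(population, key=fitness, reverse=True)[:4]  # selection
    population = survivors + [mutate(random.choice(survivors)) for _ in range(4)]
best = max(population, key=fitness)
```

With only pick-the-best selection and single-word mutations, the loop recovers most of the hidden descriptor set; the real attack additionally extracts features common across offspring to generalize the template.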


Leveraging Prior Experience: An Expandable Auxiliary Knowledge Base for Text-to-SQL

Chu, Zhibo, Wang, Zichong, Qin, Qitao

arXiv.org Artificial Intelligence

Large Language Models (LLMs) exhibit impressive problem-solving skills across many tasks, but they still underperform compared to humans in various downstream applications, such as text-to-SQL. On the BIRD benchmark leaderboard, human performance achieves an accuracy of 92.96%, whereas the top-performing method reaches only 72.39%. Notably, these state-of-the-art (SoTA) methods predominantly rely on in-context learning to simulate human-like reasoning. However, they overlook a critical human skill: continual learning. Inspired by the educational practice of maintaining mistake notebooks during our formative years, we propose LPE-SQL (Leveraging Prior Experience: An Expandable Auxiliary Knowledge Base for Text-to-SQL), a novel framework designed to augment LLMs by enabling continual learning without requiring parameter fine-tuning. LPE-SQL consists of four modules that i) retrieve relevant entries, ii) generate SQL efficiently, iii) produce the final result through a cross-consistency mechanism, and iv) log successful and failed tasks along with their reasoning processes or reflection-generated tips. Importantly, the core module of LPE-SQL is the fourth one, while the other modules employ foundational methods, allowing LPE-SQL to be easily integrated with SoTA technologies to further enhance performance. Our experimental results demonstrate that this continual learning approach yields substantial performance gains, with the smaller Llama-3.1-70B model surpassing the performance of the larger Llama-3.1-405B model using SoTA methods.
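The "mistake notebook" idea, logging solved and failed attempts and retrieving related entries as few-shot context, can be sketched as below. The word-overlap retrieval and all field names are illustrative placeholders, not LPE-SQL's actual implementation:

```python
# Toy expandable notebook for text-to-SQL experience.
class Notebook:
    def __init__(self):
        self.entries = []

    def log(self, question, sql, outcome, tip=""):
        # Store both successes and failures, with reflection tips.
        self.entries.append({"question": question, "sql": sql,
                             "outcome": outcome, "tip": tip})

    def retrieve(self, question, k=2):
        # Naive word-overlap retrieval of related past tasks.
        q = set(question.lower().split())
        overlap = lambda e: len(q & set(e["question"].lower().split()))
        return sorted(self.entries, key=overlap, reverse=True)[:k]

nb = Notebook()
nb.log("count users older than 30",
       "SELECT COUNT(*) FROM users WHERE age > 30", "success")
nb.log("average order total per user",
       "SELECT user_id, AVG(total) FROM orders GROUP BY user_id",
       "failure", tip="remember GROUP BY when aggregating per entity")
hits = nb.retrieve("count orders per user")
```

The retrieved entries, including the failure's reflection tip, would then be placed in the LLM's prompt before generating new SQL, which is how the notebook enables learning without parameter updates.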


Generating Data with Text-to-Speech and Large-Language Models for Conversational Speech Recognition

Cornell, Samuele, Darefsky, Jordan, Duan, Zhiyao, Watanabe, Shinji

arXiv.org Artificial Intelligence

Currently, a common approach in many speech processing tasks is to leverage large scale pre-trained models by fine-tuning them on in-domain data for a particular application. Yet obtaining even a small amount of such data can be problematic, especially for sensitive domains and conversational speech scenarios, due to both privacy issues and annotation costs. To address this, synthetic data generation using single speaker datasets has been employed. Yet, for multi-speaker cases, such an approach often requires extensive manual effort and is prone to domain mismatches. In this work, we propose a synthetic data generation pipeline for multi-speaker conversational ASR, leveraging a large language model (LLM) for content creation and a conversational multi-speaker text-to-speech (TTS) model for speech synthesis. We conduct evaluation by fine-tuning the Whisper ASR model for telephone and distant conversational speech settings, using both in-domain data and generated synthetic data. Our results show that the proposed method is able to significantly outperform classical multi-speaker generation approaches that use external, non-conversational speech datasets.


Task Oriented In-Domain Data Augmentation

Liang, Xiao, Hu, Xinyu, Zuo, Simiao, Gong, Yeyun, Lou, Qiang, Liu, Yi, Huang, Shao-Lun, Jiao, Jian

arXiv.org Artificial Intelligence

Large Language Models (LLMs) have shown superior performance in various applications and fields. To achieve better performance on specialized domains such as law and advertisement, LLMs are often continually pre-trained on in-domain data. However, existing approaches suffer from two major issues. First, in-domain data are scarce compared to general domain-agnostic data. Second, data used for continual pre-training are not task-aware, such that they may not be helpful to downstream applications. We propose TRAIT, a task-oriented in-domain data augmentation framework. Our framework is divided into two parts: in-domain data selection and task-oriented synthetic passage generation. The data selection strategy identifies and selects a large amount of in-domain data from general corpora, and thus significantly enriches domain knowledge in the continual pre-training data. The synthetic passages contain guidance on how to use domain knowledge to answer questions about downstream tasks. We adapt LLMs to two domains: advertisement and math. On average, TRAIT improves LLM performance by 8% in the advertisement domain and 7.5% in the math domain. Large language models (LLMs) have achieved significant performance improvements in various applications such as language modeling (Brown et al., 2020; Touvron et al., 2023; Chowdhery et al., 2023) and visual understanding (Radford et al., 2021). They have also shown superior performance in fields such as finance (Xie et al., 2023b), e-commerce (Ma et al., 2023) and healthcare (Bakhshandeh, 2023). However, the models are usually trained on a large amount of general domain-agnostic data, such as web corpora. Because of the lack of domain-specific training, LLMs suffer from subpar performance when directly applied to certain domains such as advertisement. To adapt LLMs to a specific domain, continual pre-training methods (Gururangan et al., 2020) are commonly applied.
In particular, the LLM is continually pre-trained on in-domain corpora, such that it can acquire domain knowledge and better adapt to downstream tasks.
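The selection step, scoring general-corpus documents by similarity to a small in-domain seed set and keeping the top fraction, can be sketched as follows. The bag-of-words cosine scorer, seed text, and toy corpus are illustrative only; TRAIT's actual selector may differ:

```python
from collections import Counter
import math

def bow(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Small in-domain seed (advertisement) and a mixed general corpus.
seed = bow("ad click-through rate bidding campaign targeting conversion")
corpus = [
    "campaign bidding strategies improve ad conversion",
    "a recipe for sourdough bread with rye flour",
    "targeting and click-through optimization for campaigns",
]
scored = sorted(corpus, key=lambda d: cosine(bow(d), seed), reverse=True)
selected = scored[:2]  # keep top-k as continual pre-training data
```

The off-topic document scores near zero and is dropped, which is the sense in which selection "enriches domain knowledge" in the continual pre-training mixture.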


Skywork: A More Open Bilingual Foundation Model

Wei, Tianwen, Zhao, Liang, Zhang, Lichang, Zhu, Bo, Wang, Lijie, Yang, Haihua, Li, Biye, Cheng, Cheng, Lü, Weiwei, Hu, Rui, Li, Chenxia, Yang, Liu, Luo, Xilin, Wu, Xuejie, Liu, Lunan, Cheng, Wenjun, Cheng, Peng, Zhang, Jianhao, Zhang, Xiaoyu, Lin, Lei, Wang, Xiaokun, Ma, Yutuan, Dong, Chuanhai, Sun, Yanqi, Chen, Yifu, Peng, Yongyi, Liang, Xiaojuan, Yan, Shuicheng, Fang, Han, Zhou, Yahui

arXiv.org Artificial Intelligence

In this technical report, we present Skywork-13B, a family of large language models (LLMs) trained on a corpus of over 3.2 trillion tokens drawn from both English and Chinese texts. This bilingual foundation model is the most extensively trained and openly published LLM of comparable size to date. We introduce a two-stage training methodology using a segmented corpus, targeting general-purpose training and then domain-specific enhancement training, respectively. We show that our model not only excels on popular benchmarks, but also achieves state-of-the-art performance in Chinese language modeling on diverse domains. Furthermore, we propose a novel leakage detection method, demonstrating that test data contamination is a pressing issue warranting further investigation by the LLM community. To spur future research, we release Skywork-13B along with checkpoints obtained during intermediate stages of the training process. We are also releasing part of our SkyPile corpus, a collection of over 150 billion tokens of web text, which is the largest high-quality open Chinese pre-training corpus to date. We hope Skywork-13B and our open corpus will serve as a valuable open-source resource to democratize access to high-quality LLMs.
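The intuition behind loss-based contamination checks can be sketched numerically: if a model's average loss on a benchmark's test split is far below its loss on comparable held-out reference text, the test data may have leaked into training. The threshold, numbers, and function below are illustrative and are not Skywork's actual detection procedure:

```python
# Toy loss-gap contamination check (illustrative, not Skywork's method).
def leakage_flag(test_losses, reference_losses, threshold=0.2):
    gap = (sum(reference_losses) / len(reference_losses)
           - sum(test_losses) / len(test_losses))
    return gap > threshold

suspicious = leakage_flag([1.1, 1.0, 1.2], [1.9, 2.0, 2.1])  # large gap
clean = leakage_flag([2.0, 2.1], [2.0, 2.05])                # no gap
```

The key design question in practice is constructing reference text that is genuinely comparable to the test split, so that the gap reflects memorization rather than domain difficulty.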